In recent years, the adoption of Mobile Edge Computing (MEC) has grown because it brings computing resources, including storage, computation, and networking, closer to end users at the network edge. This proximity enables faster and more efficient data processing, reduced latency, and better overall performance for mobile applications. In this study, we evaluate the effectiveness of two reinforcement learning algorithms, Deep Q-Network (DQN) and Asynchronous Advantage Actor-Critic (A3C), in optimizing the performance of web applications in MEC environments with respect to latency, CPU usage, and memory utilization. We conducted experiments on a sample dataset and compared the performance of models with and without MEC. The results show that MEC substantially improves web application performance. Both DQN and A3C yield promising latency improvements for web applications in MEC environments; however, A3C outperforms DQN in terms of CPU utilization and memory usage. Overall, our study highlights the potential of reinforcement learning algorithms for improving the performance of MEC-based web applications.
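To make the setting concrete, the following is a minimal sketch of the kind of value-based RL loop the abstract refers to: an agent deciding where to route requests in an MEC system so as to keep latency and load down. The environment, server loads, reward weights, and state encoding here are all illustrative assumptions (not the paper's actual experimental setup), and a linear Q-function with a bias feature stands in for the DQN's neural network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions: three offloading targets with fixed mean load
# (two edge servers and a more distant cloud), and a state made of
# normalized latency, CPU use, and memory use, each in [0, 1].
SERVER_LOAD = np.array([0.2, 0.5, 0.8])
STATE_DIM = 3
GAMMA, LR, EPS = 0.9, 0.05, 0.1

def features(state):
    """State features plus a constant bias term."""
    return np.append(state, 1.0)

# Q(s, a) = W[a] @ features(s) -- a linear stand-in for the DQN network.
W = np.zeros((len(SERVER_LOAD), STATE_DIM + 1))

def env_step(state, action):
    """Toy dynamics: less-loaded targets yield higher (less negative) reward."""
    reward = -(0.7 * SERVER_LOAD[action] + 0.3 * state[0])
    next_state = np.clip(state + rng.normal(0.0, 0.05, STATE_DIM), 0.0, 1.0)
    return next_state, reward

state = rng.random(STATE_DIM)
for _ in range(3000):
    # Epsilon-greedy action selection.
    if rng.random() < EPS:
        action = int(rng.integers(len(SERVER_LOAD)))
    else:
        action = int(np.argmax(W @ features(state)))
    next_state, reward = env_step(state, action)
    # One-step TD update toward r + gamma * max_a' Q(s', a').
    target = reward + GAMMA * np.max(W @ features(next_state))
    td_error = target - W[action] @ features(state)
    W[action] += LR * td_error * features(state)
    state = next_state

q_at_mid = W @ features(np.full(STATE_DIM, 0.5))
print("Q-values at a mid-range state:", np.round(q_at_mid, 2))
```

A full DQN additionally uses a replay buffer and a target network, and A3C replaces the single-threaded value update with several asynchronous actor-learner workers sharing one policy/value network; this sketch only shows the core temporal-difference loop both methods build on.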